77 research outputs found

    An alternative to field-normalization in the aggregation of heterogeneous scientific fields

    A possible solution to the problem of aggregating heterogeneous fields in the all-sciences case relies on the normalization of the raw citations received by all publications. In this paper, we study an alternative solution that does not require any citation normalization. Provided one uses size- and scale-independent indicators, the citation impact of any research unit can be calculated as the average (weighted by the publication output) of the citation impact that the unit achieves in all fields. The two alternatives are compared by evaluating the research output of the 500 universities in the 2013 edition of the CWTS Leiden Ranking with two citation impact indicators with very different properties. We use a large Web of Science dataset consisting of 3.6 million articles published in the 2005-2008 period, and a classification system distinguishing between 5,119 clusters. The two main findings are as follows. Firstly, differences in production and citation practices between the 3,332 clusters with more than 250 publications account for 22.5% of the overall citation inequality. After the standard field-normalization procedure, in which cluster mean citations are used as normalization factors, this figure is reduced to 4.3%. Secondly, the differences between the university rankings according to the two solutions for the all-sciences aggregation problem are of a small order of magnitude for both citation impact indicators. Ruiz-Castillo acknowledges financial support from the Spanish MEC through grant ECO2011-29762.
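
    The following minimal sketch contrasts the two aggregation routes. It is illustrative only: the data, the function names, and the choice of a mean-based indicator are assumptions, not the authors' code.

        # Sketch of the two aggregation routes: standard field normalization
        # versus an output-weighted average of per-field impact. Toy data.
        from collections import defaultdict

        publications = [
            # (university, field_cluster, citations)
            ("U1", "f1", 10), ("U1", "f2", 0),
            ("U2", "f1", 4), ("U2", "f2", 2), ("U2", "f2", 6),
        ]

        # Field mean citations, the normalization factors of the standard route.
        sums = defaultdict(lambda: [0.0, 0])
        for _, field, c in publications:
            sums[field][0] += c
            sums[field][1] += 1
        field_mean = {f: s / n for f, (s, n) in sums.items()}

        def standard_route(university):
            """Average of citations after dividing each by its field mean."""
            scores = [c / field_mean[f] for u, f, c in publications if u == university]
            return sum(scores) / len(scores)

        def alternative_route(university):
            """Output-weighted average of a per-field, size- and scale-independent
            indicator (here, the unit's field mean over the field mean)."""
            per_field = defaultdict(list)
            for u, f, c in publications:
                if u == university:
                    per_field[f].append(c)
            n_total = sum(len(cs) for cs in per_field.values())
            return sum(len(cs) / n_total * (sum(cs) / len(cs)) / field_mean[f]
                       for f, cs in per_field.items())

        # For this mean-based indicator the two routes coincide exactly; for
        # other indicators (e.g., a Top 10% share) they generally differ.
        print(standard_route("U2"), alternative_route("U2"))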

    Multiplicative versus fractional counting methods for co-authored publications: the case of the 500 universities in the Leiden Ranking

    This paper studies the assignment of responsibility to the participants in co-authored scientific publications. In the conceptual part, we establish that the key shortcoming of the full counting method is its incompatibility with the use of additively decomposable citation impact indicators. In the empirical part of the paper, we study the consequences of adopting the address-line fractional or the multiplicative counting method. For this purpose, we use a Web of Science dataset consisting of 3.6 million articles published in the 2005-2008 period and classified into 5,119 clusters. Our research units are the 500 universities in the 2013 edition of the CWTS Leiden Ranking. Citation impact is measured using the Mean Normalized Citation Score and the Top 10% indicators. The main findings are the following. Firstly, although a change of counting methods alters co-authorship and citation impact patterns, cardinal differences between co-authorship rates and between citation impact values are generally small. Nevertheless, such small differences generate considerable re-rankings of universities. Secondly, the universities most penalized by the adoption of a fractional rather than a multiplicative approach are those with a small co-authorship rate for the citation distribution as a whole, a large co-authorship rate in the upper tail of this distribution, a low citation impact performance, and a small number of solo publications. Ruiz-Castillo acknowledges financial support from the Spanish MEC through grant ECO2014-55953-P.
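
    As a simplified sketch of the two counting methods (toy data, not the authors' implementation): under address-line fractional counting each of a paper's n address lines contributes 1/n of the paper, while under multiplicative counting every participating university receives the whole paper.

        # Fractional vs. multiplicative counting for one co-authored paper.
        def fractional_credit(address_lines):
            """Address-line fractional counting: each of the n address lines
            contributes 1/n, so a university's credit is its share of lines."""
            n = len(address_lines)
            credits = {}
            for u in address_lines:
                credits[u] = credits.get(u, 0.0) + 1.0 / n
            return credits

        def multiplicative_credit(address_lines):
            """Multiplicative counting: each participating university receives
            the full paper, so a paper with k universities is counted k times."""
            return {u: 1.0 for u in set(address_lines)}

        lines = ["U1", "U1", "U2"]  # two address lines from U1, one from U2
        print(fractional_credit(lines))      # U1: 2/3, U2: 1/3
        print(multiplicative_credit(lines))  # U1: 1.0, U2: 1.0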

    Within and across department variability in individual productivity: the case of economics

    University departments (or research institutes) are the governance units in any scientific field where the demand for and the supply of researchers interact. As a first step towards a formal model of this process, this paper investigates the characteristics of the productivity distributions of a population of 2,530 individuals with at least one publication who were working in 81 of the world's top Economics departments in 2007. Individual productivity is measured in two ways: as the number of publications until 2007, and as a quality index that assigns different weights to articles published in four journal equivalent classes. The academic age of individuals, measured as the number of years from obtaining the PhD until 2007, is used to compute productivity per year. Independently of the two productivity measures, and both before and after age normalization, the five main findings of the paper are the following. Firstly, individuals within each department have very different productivities. Secondly, there is no single pattern of productivity inequality and skewness at the department level; on the contrary, productivity distributions are very different across departments. Thirdly, the effect on overall productivity inequality of differences in productivity distributions across departments is greater than the analogous effect in other contexts. Fourthly, to a large extent, this effect on overall productivity inequality is accounted for by scale factors well captured by departments' mean productivities. Fifthly, this high degree of departmental heterogeneity is found to be compatible with greater homogeneity across the members of a partition of the sample into seven countries and a residual category. Ruiz-Castillo acknowledges financial support from the Spanish MEC through grant SEJ2007-6743.
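
    A hypothetical sketch of the two productivity measures and the age normalization. The journal-class weights below are invented for illustration; the paper's actual weights may differ.

        # Two productivity measures, before and after age normalization.
        JOURNAL_CLASS_WEIGHTS = {"A": 4.0, "B": 2.0, "C": 1.0, "D": 0.5}  # assumed weights

        def productivity(journal_classes, phd_year, ref_year=2007, quality=False):
            """journal_classes: one class label per publication until ref_year."""
            if quality:
                raw = sum(JOURNAL_CLASS_WEIGHTS[c] for c in journal_classes)
            else:
                raw = len(journal_classes)  # simple publication count
            academic_age = max(ref_year - phd_year, 1)  # years since the PhD
            return raw, raw / academic_age  # total, and per-year (age-normalized)

        print(productivity(["A", "C", "C"], phd_year=1997))                # counts
        print(productivity(["A", "C", "C"], phd_year=1997, quality=True))  # quality index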

    A Multi-Level Analysis of World Scientific Output in Pharmacology

    The purpose of this chapter is to analyse international research in “pharmacology, toxicology and pharmaceutics” (hereafter pharmacology) on the basis of the scientific papers listed in the Scopus multidisciplinary database. This primary objective is reached by answering the following questions (in the section on results). What weight does the subject area “pharmacology, toxicology and pharmaceutics” carry in world-wide science? What is the percentage contribution made by the various regions of the world to the subject area “pharmacology, toxicology and pharmaceutics”? Can certain regions be identified as leaders on that basis, as in other scientific contexts? Are emerging countries present in the field? Do the most productive countries also publish the largest number of journals? What features characterise the scientific output of companies that publish pharmacological papers?

    Measuring research performance in international collaboration

    Chinchilla-Rodríguez, Zaida; Miguel, Sandra; Perianes-Rodríguez, Antonio (2016). Measuring research performance in international collaboration. 14th International Congress of Information, Info '2016, La Havana, Cuba, October 31 - November 4, 2016.
    International collaboration in the creation of knowledge is changing the structural stratification of science, with profound implications for the governance of science. The analysis of collaboration in Latin American and Caribbean countries is of particular significance, because initiatives are often the result of “research-for-aid” arrangements, generally based on North-South asymmetries. However, collaboration for mutual benefit and excellence has gained increasing acceptance, with “partner” selection becoming a strategic priority to enhance one's own production. The general aim of this study is to quantify the benefit rate, in visibility and impact, of scientific production in the field of nanoscience and nanotechnology (NST), bearing in mind the different types of output (total, in leadership, excellent, and excellent with leadership) of the six main producers of knowledge in NST in Latin America in the period 2003-2013. More specifically, we aspire to visualize the networks of international collaboration of a given country (ego-network), to represent the differences between the citations received per type of output, and to identify the partners with whom a country has greater potential and capacity to generate knowledge of high quality, as well as the differences existing in terms of visibility depending on the type of production analyzed. In short, we wish to determine the benefits of such collaborative efforts. In this way we can respond to questions such as: a) With which countries is collaboration established? and b) With which collaborating countries is the greatest volume of citations per document obtained, according to the type of output? This work was made possible through financing by the project NANOMETRICS (Ref. CSO2014-57770-R), supported by the Ministerio de Economía y Competitividad of Spain. Peer reviewed.
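
    A minimal sketch of the kind of computation behind question (b). The records below are made up: country codes, output types, and citation counts are all illustrative.

        # Citations per document, broken down by partner country and output type.
        from collections import defaultdict

        papers = [
            # (partner_country, output_type, citations) for one ego country's NST papers
            ("US", "total", 12), ("US", "leadership", 5),
            ("DE", "total", 8), ("DE", "excellent", 30),
        ]

        totals = defaultdict(lambda: [0, 0])  # (partner, type) -> [citations, papers]
        for partner, out_type, cites in papers:
            totals[(partner, out_type)][0] += cites
            totals[(partner, out_type)][1] += 1

        for (partner, out_type), (c, n) in sorted(totals.items()):
            print(partner, out_type, c / n)  # citations per document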

    Within- and between-department variability in individual productivity. The case of Economics

    There are two types of research units whose performance is usually investigated in one or several scientific fields: individuals (or publications), and larger units such as universities or entire countries. In contrast, information about university departments (or research institutes) is not easy to come by. This is important because, in the social sciences, university departments are the governance units where the demand for and the supply of researchers determine an equilibrium allocation of scholars to institutions. This paper uses a unique dataset consisting of all individuals working in 2007 in the top 81 Economics departments in the world according to the Econphd university ranking.

    A comparison of the Web of Science with publication-level classification systems of Science

    In this paper we propose a new criterion for choosing between a pair of classification systems of science that assign publications (or journals) to a set of clusters. Consider the standard target (cited-side) normalization procedure in which cluster mean citations are used as normalization factors. We recommend system A over system B whenever the standard normalization procedure based on system A performs better than the standard normalization procedure based on system B. Performance is assessed in terms of two double tests, one graphical and one numerical, that use both classification systems for evaluation purposes. In addition, a pair of classification systems is compared using a third, independent classification system for evaluation purposes. We illustrate this strategy by comparing a Web of Science journal-level classification system, consisting of 236 journal subject categories, with two publication-level algorithmically constructed classification systems consisting of 1,363 and 5,119 clusters. There are two main findings. Firstly, the second publication-level system is found to dominate the first. Secondly, the publication-level system at the highest granularity level and the Web of Science journal-level system are found to be non-comparable. Nevertheless, we find reasons to recommend the publication-level option. This research project builds on earlier work started by Antonio Perianes-Rodriguez during a research visit to the Centre for Science and Technology Studies (CWTS) of Leiden University as awardee of José Castillejo grant CAS15/00178, funded by the Spanish MEC. Ruiz-Castillo is a visiting researcher at CWTS and gratefully acknowledges CWTS for the use of its data. Ruiz-Castillo acknowledges financial support from the Spanish MEC through grant ECO2014-55953-P, as well as grant MDM 2014-0431 to his Departamento de Economía.
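
    A crude numerical stand-in for the idea behind the tests (toy data; the paper's actual graphical and numerical double tests are more elaborate): normalize citations with one system's cluster means, then check how much variation in cluster means remains when the publications are regrouped by the other system.

        # Compare two classification systems via residual between-cluster variation.
        from collections import defaultdict

        def cluster_means(citations, labels):
            sums = defaultdict(lambda: [0.0, 0])
            for c, l in zip(citations, labels):
                sums[l][0] += c
                sums[l][1] += 1
            return {l: s / n for l, (s, n) in sums.items()}

        def residual_spread(citations, norm_labels, eval_labels):
            """Normalize with norm_labels' cluster means; measure the spread of
            the cluster means that remain under eval_labels (smaller is better)."""
            means = cluster_means(citations, norm_labels)
            normalized = [c / means[l] for c, l in zip(citations, norm_labels)]
            eval_means = cluster_means(normalized, eval_labels).values()
            return max(eval_means) - min(eval_means)

        citations = [10, 2, 7, 1, 4, 4]
        system_a = ["a1", "a1", "a2", "a2", "a3", "a3"]
        system_b = ["b1", "b2", "b1", "b2", "b1", "b2"]
        # Recommend A over B if normalizing with A leaves less spread under B
        # than normalizing with B leaves under A.
        print(residual_spread(citations, system_a, system_b),
              residual_spread(citations, system_b, system_a))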

    The impact of classification systems in the evaluation of the research performance of the Leiden Ranking Universities

    In this paper, we investigate the consequences of choosing different classification systems (namely, different ways of assigning publications or journals to scientific fields) for the ranking of research units. We study the impact of this choice on the ranking of the 500 universities in the 2013 edition of the Leiden Ranking in two cases. Firstly, we compare a Web of Science journal-level classification system, consisting of 236 subject categories, and a publication-level algorithmically constructed system, denoted G8, consisting of 5,119 clusters. The result is that the consequences of the move from the WoS to the G8 system using the Top 1% citation impact indicator are much greater than the consequences of this move using the Top 10% indicator. Secondly, we compare the G8 classification system and a publication-level alternative of the same family, the G6 system, consisting of 1,363 clusters. The result is that, although less important than in the previous case, the consequences of the move from the G6 to the G8 system under the Top 1% indicator are still large. This research project builds on earlier work started by Antonio Perianes-Rodriguez during a research visit to the Centre for Science and Technology Studies (CWTS) of Leiden University as awardee of José Castillejo grant CAS15/00178, funded by the Spanish MEC. Ruiz-Castillo is a visiting researcher at CWTS and gratefully acknowledges CWTS for the use of its data. Ruiz-Castillo acknowledges financial support from the Spanish MEC through grant ECO2014-55953-P, as well as grant MDM 2014-0431 to his Departamento de Economía.
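
    As a rough sketch (not CWTS's exact methodology, which handles ties and field normalization more carefully), a Top x% indicator can be read as the share of a unit's articles that fall among the most-cited x% of all articles:

        # Share of a unit's papers among the world's top x% most-cited papers.
        def top_x_share(world_citations, unit_citations, x=0.10):
            ranked = sorted(world_citations, reverse=True)
            cutoff_rank = max(int(len(ranked) * x), 1)
            threshold = ranked[cutoff_rank - 1]  # simple cutoff; ties ignored here
            return sum(c >= threshold for c in unit_citations) / len(unit_citations)

        world = [0, 1, 1, 2, 3, 5, 8, 13, 21, 34]
        unit = [2, 13, 34]
        print(top_x_share(world, unit, x=0.10))  # Top 10% share of the unit

    With x = 0.01 the cutoff moves into the extreme tail of the citation distribution, where cluster boundaries matter most, which is consistent with the finding that the choice of classification system affects the Top 1% indicator far more than the Top 10%.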

    University citation distributions

    We investigate the citation distributions of the 500 universities in the 2013 edition of the Leiden Ranking produced by the Centre for Science and Technology Studies (CWTS). We use a Web of Science dataset consisting of 3.6 million articles published from 2003 to 2008 and classified into 5,119 clusters. The main findings are the following. First, the universality claim, according to which all university citation distributions, appropriately normalized, follow a single functional form, is not supported by the data. Second, the 500 university citation distributions are all highly skewed and very similar. Broadly speaking, university citation distributions appear to behave as if they differ by a relatively constant scale factor over a large, intermediate part of their support. Third, citation-impact differences between universities account for 3.85% of overall citation inequality. This percentage is greatly reduced when university citation distributions are normalized using their mean normalized citation scores (MNCSs) as normalization factors. Finally, regarding practical consequences, we only need a single explanatory model for the type of high skewness characterizing all university citation distributions, and the similarity of university citation distributions goes a long way in explaining the similarity of the university rankings obtained with the MNCS and the Top 10% indicator.
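
    A small sketch of the inequality decomposition mentioned above, using the Theil index as one example of an additively decomposable measure; the paper's own measure and its handling of uncited papers may differ, and the values below are toy data.

        # Between-university share of overall citation inequality (Theil T index).
        from math import log

        def theil(values):
            """Theil T index; assumes strictly positive values."""
            m = sum(values) / len(values)
            return sum((v / m) * log(v / m) for v in values) / len(values)

        def between_share(groups):
            """groups: one list of citation counts per university."""
            all_vals = [v for g in groups for v in g]
            m, n = sum(all_vals) / len(all_vals), len(all_vals)
            between = sum(len(g) / n * (sum(g) / len(g) / m) * log(sum(g) / len(g) / m)
                          for g in groups)
            return between / theil(all_vals)

        print(between_share([[1, 2, 9], [1, 1, 2], [3, 5, 13]]))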

    Differences in citation impact across countries

    Using a large dataset, indexed by Thomson Reuters, consisting of 4.4 million articles published in 1998-2003 with a five-year citation window for each year, this paper studies country citation distributions in a partition of the world into 36 countries and two geographical areas, in the all-sciences case and in eight broad scientific fields. The two key findings are the following. Firstly, the shape of country citation distributions is highly skewed and very similar across all fields. Secondly, differences in country citation distributions appear to have a strong scale factor component. The implication is that, in spite of the skewness of citation distributions, international comparisons of citation impact in terms of country mean citations capture such scale factors well. The empirical scenario described in the paper helps explain why, in each field and in the all-sciences case, the country rankings according to (i) mean citations and (ii) the percentage of articles in each country belonging to the set formed by the 10% most highly cited papers are so similar to each other. Albarrán acknowledges additional financial support from the Spanish MEC through grants ECO2009-11165 and ECO2011-29751, and Ruiz-Castillo through grant SEJ2007-67436.
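
    A quick illustrative check of the scale-factor claim: if two country distributions differ only by a scale factor, their quantiles should roughly coincide after dividing by the mean. The data below are invented so that the effect is exact.

        # Mean-normalized quantiles of two citation distributions.
        def normalized_quantiles(citations, qs=(0.25, 0.5, 0.75, 0.9)):
            m = sum(citations) / len(citations)
            ranked = sorted(c / m for c in citations)
            return [ranked[int(q * (len(ranked) - 1))] for q in qs]

        country_a = [0, 1, 1, 2, 4, 8, 24]
        country_b = [0, 3, 3, 6, 12, 24, 72]  # exactly 3x country_a
        print(normalized_quantiles(country_a))
        print(normalized_quantiles(country_b))  # identical after normalization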
    • …